Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks. As promising surrogate solvers of partial differential equations (PDEs) for real-time prediction, deep neural operators such as deep operator networks (DeepONets) provide a new simulation paradigm in science and engineering. Pure data-driven neural operators and deep learning models, in general, are usually limited to interpolation scenarios, where new predictions utilize inputs within the support of the training set. However, in the inference stage of real-world applications, the input may lie outside the support, i.e., extrapolation is required, which may result in large errors and unavoidable failure of deep learning models. Here, we address this challenge of extrapolation for deep neural operators. First, we systematically investigate the extrapolation behavior of DeepONets by quantifying the extrapolation complexity via the 2-Wasserstein distance between two function spaces, and we propose a new behavior of bias-variance trade-off for extrapolation with respect to model capacity. Subsequently, we develop a complete workflow, including extrapolation determination, and we propose five reliable learning methods that guarantee safe prediction under extrapolation by requiring additional information -- the governing PDEs of the system or sparse new observations. The proposed methods are based on either fine-tuning a pre-trained DeepONet or multifidelity learning. We demonstrate the effectiveness of the proposed framework for various types of parametric PDEs. Our systematic comparisons provide practical guidelines for selecting a proper extrapolation method depending on the available information, desired accuracy, and required inference speed.
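For two Gaussian random fields (a common choice of DeepONet input space), the 2-Wasserstein distance has the closed form $W_2^2 = \mathrm{tr}(C_1 + C_2 - 2(C_1^{1/2} C_2 C_1^{1/2})^{1/2})$ in terms of the covariance operators. A minimal sketch of quantifying extrapolation complexity this way on a discretized 1-D grid, assuming RBF covariance kernels (the function names and length scales below are illustrative):

```python
import numpy as np

def rbf_cov(x, ell, jitter=1e-8):
    """Covariance matrix of a zero-mean Gaussian random field with an RBF kernel."""
    diff = x[:, None] - x[None, :]
    return np.exp(-0.5 * (diff / ell) ** 2) + jitter * np.eye(len(x))

def psd_sqrt(c):
    """Symmetric square root of a positive semi-definite matrix."""
    w, v = np.linalg.eigh(c)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def w2_gaussian(c1, c2):
    """2-Wasserstein distance between zero-mean Gaussians N(0, C1) and N(0, C2)."""
    s1 = psd_sqrt(c1)
    cross = psd_sqrt(s1 @ c2 @ s1)
    return float(np.sqrt(max(np.trace(c1 + c2 - 2.0 * cross), 0.0)))

grid = np.linspace(0.0, 1.0, 20)
c_train = rbf_cov(grid, ell=0.5)      # correlation length of the training inputs
c_test = rbf_cov(grid, ell=0.2)       # rougher test inputs: extrapolation
print(w2_gaussian(c_train, c_train))  # ~0: interpolation regime
print(w2_gaussian(c_train, c_test))   # > 0: nonzero extrapolation complexity
```

A larger distance between training and test input spaces signals that extrapolation determination should route the prediction to one of the safeguarded methods.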
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional knowledge distillation methods include response-based methods and feature-based methods. Response-based methods are the most widely used but suffer from a lower upper bound on model performance, while feature-based methods impose constraints on the vocabularies and tokenizers. In this paper, we propose LEAD, a tokenizer-free liberal feature-based distillation method. LEAD aligns the distributions between the teacher model and the student model; it is effective, extendable, and portable, and has no requirements on vocabulary, tokenizer, or model architecture. Extensive experiments show the effectiveness of LEAD on several widely used benchmarks, including MS MARCO Passage, TREC Passage 19, TREC Passage 20, MS MARCO Document, TREC Document 19, and TREC Document 20.
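One schematic reading of distribution alignment (not LEAD's exact objective) is a KL divergence between the teacher's and student's softened relevance-score distributions over the same candidate passages; because only scores are compared, no shared vocabulary or tokenizer is needed:

```python
import numpy as np

def softmax(scores, tau=1.0):
    """Numerically stable softmax with temperature tau."""
    z = np.asarray(scores, dtype=float) / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def alignment_loss(teacher_scores, student_scores, tau=1.0):
    """KL(teacher || student) between softened score distributions over
    shared candidate passages (a sketch, not LEAD's exact formulation)."""
    p = softmax(teacher_scores, tau)
    q = softmax(student_scores, tau)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [3.1, 0.2, -1.0, 0.5]   # teacher scores for 4 candidate passages
student = [2.0, 0.1, -0.5, 0.4]
print(alignment_loss(teacher, teacher))        # 0.0: already aligned
print(alignment_loss(teacher, student) > 0.0)  # True: misalignment penalized
```

Minimizing such a loss pulls the student's ranking distribution toward the teacher's without ever comparing token-level representations.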
Knowledge distillation is an effective way to transfer knowledge from a strong teacher to an efficient student model. Ideally, we expect that the better the teacher is, the better the student. However, this expectation does not always come true. It is common that a better teacher model results in a worse student via distillation, due to the non-negligible gap between teacher and student. To bridge this gap, we propose PROD, a progressive distillation method for dense retrieval. PROD consists of a teacher progressive distillation and a data progressive distillation to gradually improve the student. We conduct extensive experiments on five widely used benchmarks, MS MARCO Passage, TREC Passage 19, TREC Document 19, MS MARCO Document, and Natural Questions, where PROD achieves the state of the art among distillation methods for dense retrieval. The code and models will be released.
Structure from motion (SfM) and ground plane homography estimation are critical to autonomous driving and other robotics applications. Recently, much progress has been made in using deep neural networks for SfM and homography estimation, respectively. However, directly applying existing methods to ground plane homography estimation may fail, because the road is often a small part of the scene. Moreover, the performance of deep SfM approaches is still inferior to that of traditional methods. In this paper, we propose a method that learns to solve both problems in an end-to-end manner, improving the performance of both. The proposed network consists of a Depth-CNN, a Pose-CNN, and a Ground-CNN. The Depth-CNN and Pose-CNN estimate the dense depth map and ego-motion, solving SfM, while the Pose-CNN and Ground-CNN, followed by a homography layer, solve the ground plane estimation problem. By enforcing consistency between the SfM and homography estimation results, the whole network can be trained end to end using photometric and homography losses, with the help of road segmentation provided by an off-the-shelf segmenter. Comprehensive experiments are conducted on the KITTI benchmark, demonstrating promising results compared with various state-of-the-art approaches.
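A homography layer of this kind presumably builds on the standard plane-induced homography $H = K(R - t n^\top / d)K^{-1}$ that relates two views of the ground plane given the camera motion $(R, t)$, plane normal $n$, and camera height $d$. A sketch of that geometric relation (the intrinsics and height below are illustrative, not the paper's network):

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane {X : n^T X = d} between two views:
    H = K (R - t n^T / d) K^{-1}. Consistency between this map and the
    SfM outputs (R, t, depth) is what ties the two tasks together."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.array([[720.0,   0.0, 620.0],
              [  0.0, 720.0, 180.0],
              [  0.0,   0.0,   1.0]])  # KITTI-like intrinsics (illustrative)
n = np.array([0.0, 1.0, 0.0])          # ground normal: camera y-axis points down
d = 1.65                               # camera height above the road, meters

H_static = plane_homography(K, np.eye(3), np.zeros(3), n, d)
print(np.allclose(H_static, np.eye(3)))  # True: no motion -> identity homography
```

Given predicted ego-motion and ground parameters, this closed-form map lets a photometric loss on warped road pixels supervise both branches jointly.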
In this paper, we consider recovering an $n$-dimensional signal from $m$ binary measurements corrupted by noise and sign flips, under the assumption that the target signal has low generative intrinsic dimension, i.e., the target signal can be approximately generated via an $L$-Lipschitz generator $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$, $k \ll n$. Although the binary measurement model is highly nonlinear, we propose a least squares decoder and prove that, up to a constant $c$, with high probability, the least squares decoder achieves a sharp estimation error $\mathcal{O}(\sqrt{k\log(Ln)/m})$ as long as $m \geq \mathcal{O}(k\log(Ln))$. Extensive numerical simulations and comparisons with state-of-the-art methods show that the least squares decoder is robust to noise and sign flips, as indicated by our theory. By constructing a ReLU network with properly chosen depth and width, we verify the (approximately) deep generative prior, which is of independent interest.
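A toy, unconstrained version of the least squares decoder (the paper additionally restricts the estimate to the range of the generator $G$, which this sketch omits) already recovers the signal direction from sign-only measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 2000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)            # target signal on the unit sphere
A = rng.standard_normal((m, n))   # Gaussian measurement matrix
y = np.sign(A @ x)                # one-bit (binary) observations

# Least squares decoder: fit y by Ax in the least squares sense, then
# normalize (one-bit measurements lose the signal's magnitude).
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
x_hat /= np.linalg.norm(x_hat)
print(float(x_hat @ x))           # correlation close to 1: direction recovered
```

In the paper's setting, the minimization runs over $\{G(z) : z \in \mathbb{R}^k\}$ instead of all of $\mathbb{R}^n$, which is what yields the $\sqrt{k\log(Ln)/m}$ rate.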
In this work, we consider an algorithm for (nonlinear) regression problems with an $\ell_0$ penalty. Existing algorithms for $\ell_0$-based optimization problems typically run with a fixed step size, and selecting an appropriate step size depends on the restricted strong convexity and smoothness of the loss function, which is hence difficult to compute in practice. In the spirit of support detection and root finding \cite{HJK2020}, we propose a novel and efficient data-driven line search rule to adaptively determine the appropriate step size. We prove an $\ell_2$ error bound for the proposed algorithm without restrictive assumptions on the cost function. Extensive numerical comparisons with state-of-the-art algorithms on linear and logistic regression problems show the stability, effectiveness, and superiority of the proposed algorithm.
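A minimal sketch of iterative hard thresholding where the step size is chosen adaptively by simple backtracking; this is a stand-in for the paper's data-driven line search rule, not a reproduction of it:

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht_line_search(A, y, s, iters=200):
    """Iterative hard thresholding for min ||Ax - y||^2 s.t. ||x||_0 <= s,
    with a backtracking step-size rule instead of a fixed step."""
    f = lambda v: 0.5 * np.sum((A @ v - y) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)        # gradient of the least squares loss
        step = 1.0
        x_new = hard_threshold(x - step * g, s)
        while f(x_new) > f(x) and step > 1e-10:
            step *= 0.5              # backtrack until the objective decreases
            x_new = hard_threshold(x - step * g, s)
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n, s = 80, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = np.sign(rng.standard_normal(s))
y = A @ x_true
x_hat = iht_line_search(A, y, s)
print(np.linalg.norm(x_hat - x_true))  # small when the support is recovered
```

The point of an adaptive rule is that no restricted-strong-convexity or smoothness constants need to be known in advance to pick the step size.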
Discovering governing equations of a physical system, represented by partial differential equations (PDEs), from data is a central challenge in a variety of areas of science and engineering. Current methods require either some prior knowledge (e.g., candidate PDE terms) to discover the PDE form, or a large dataset to learn a surrogate model of the PDE solution operator. Here, we propose the first solution operator learning method that only needs one PDE solution, i.e., one-shot learning. We first decompose the entire computational domain into small domains, where we learn a local solution operator, and then we find the coupled solution via either mesh-based fixed-point iteration or meshfree local-solution-operator informed neural networks. We demonstrate the effectiveness of our method on different PDEs, and our method exhibits a strong generalization property.
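As a toy analogue of the mesh-based fixed-point iteration, one can replace the learned local solution operator with the exact 3-point local relation for the 1-D Poisson equation $-u'' = f$ and iterate the local map to the coupled global solution:

```python
import numpy as np

def local_op(left, right, f_mid, h):
    """Exact local solution operator for -u'' = f on a 3-point stencil.
    (In the paper, this map is learned from a single PDE solution.)"""
    return 0.5 * (left + right + h * h * f_mid)

N = 21
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)  # chosen so the exact solution is sin(pi x)
u = np.zeros(N)                     # zero Dirichlet boundary conditions

for _ in range(5000):               # mesh-based fixed-point iteration
    u[1:-1] = local_op(u[:-2], u[2:], f[1:-1], h)

print(np.max(np.abs(u - np.sin(np.pi * x))))  # small: converged global solution
```

Swapping the analytic stencil for a learned local operator is what lets a single observed solution generalize: the local map is reused everywhere the same local physics holds.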
In this paper, we construct neural networks with ReLU, sine, and $2^x$ as activation functions. For a general continuous function $f$ defined on $[0,1]^d$ with modulus of continuity $\omega_f(\cdot)$, we construct ReLU-sine-$2^x$ networks that enjoy an approximation rate $\mathcal{O}(\omega_f(\sqrt{d})\cdot 2^{-M} + \omega_f(\frac{\sqrt{d}}{N}))$, where $M, N \in \mathbb{N}^{+}$ denote the hyperparameters related to the widths of the networks. As a consequence, we can construct ReLU-sine-$2^x$ networks with depth $5$ and width $\max\left\{\left\lceil 2d^{3/2}\left(\frac{3\mu}{\epsilon}\right)^{1/\alpha}\right\rceil, 2\left\lceil\log_2\frac{3\mu d^{\alpha/2}}{2\epsilon}\right\rceil + 2\right\}$ that approximate $f \in \mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ within a given tolerance $\epsilon > 0$, measured in the $L^p$ norm with $p \in [1,\infty)$, where $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ denotes the H\"older continuous function class defined on $[0,1]^d$ with order $\alpha \in (0,1]$ and constant $\mu > 0$. Therefore, the ReLU-sine-$2^x$ networks overcome the curse of dimensionality in approximating functions on $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable, enabling us to apply SGD to training.
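A minimal illustration of mixing the three activations inside one layer (with random placeholder weights, not the paper's explicit depth-5 construction):

```python
import numpy as np

# The three activations used by the construction (all generalized differentiable).
relu = lambda z: np.maximum(z, 0.0)
sine = np.sin
exp2 = lambda z: 2.0 ** z

def relu_sine_2x_layer(x, W, b, kinds):
    """One layer whose units may each use a different activation, chosen
    per output unit from {ReLU, sine, 2^x}."""
    z = x @ W + b
    acts = {"relu": relu, "sine": sine, "2^x": exp2}
    return np.column_stack([acts[k](z[:, j]) for j, k in enumerate(kinds)])

rng = np.random.default_rng(0)
x = rng.random((4, 3))  # 4 sample points in [0,1]^3
h = relu_sine_2x_layer(x, rng.standard_normal((3, 6)), np.zeros(6),
                       ["relu", "relu", "sine", "sine", "2^x", "2^x"])
print(h.shape)  # (4, 6)
```

Because each activation is (generalized) differentiable, a network stacked from such layers can be trained with ordinary gradient-based optimizers.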
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GLoVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such word embedding has significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words that are related using propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" to define other words like "coffee," thus being human-understandable. We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GLoVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
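A toy evaluation of such a conjunctive clause over a bag of context words (the example words mirror the abstract; they are illustrative, not the output of a trained Tsetlin Machine autoencoder):

```python
def clause_matches(context, positive, negated=()):
    """Evaluate one conjunctive clause over a set of context words:
    true iff every positive literal is present and no negated literal is."""
    return (all(w in context for w in positive)
            and not any(w in context for w in negated))

# A toy logical embedding of "coffee" by a few defining context words:
coffee_clause = ("black", "cup", "hot")
print(clause_matches({"black", "hot", "cup", "morning"}, coffee_clause))  # True
print(clause_matches({"cold", "cup"}, coffee_clause))                     # False
```

Unlike a dense float vector, the clause itself is the explanation: a human can read off exactly which context words define the target word.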
Model bias triggered by long-tailed data has been widely studied. However, measures based on the number of samples cannot explain three phenomena simultaneously: (1) Given enough data, the classification performance gain is marginal with additional samples. (2) Classification performance decays precipitously as the number of training samples decreases when there is insufficient data. (3) Models trained on sample-balanced datasets still have different biases for different classes. In this work, we define and quantify the semantic scale of classes, which is used to measure the feature diversity of classes. Experimentally, we find a marginal effect of semantic scale, which perfectly describes the first two phenomena. Further, we propose a quantitative measurement of semantic scale imbalance, which can accurately reflect model bias on multiple datasets, even on sample-balanced data, revealing a novel perspective for the study of class imbalance. Due to the prevalence of semantic scale imbalance, we propose semantic-scale-balanced learning, including a general loss improvement scheme and a dynamic re-weighting training framework that overcomes the challenge of calculating semantic scales in real time during iterations. Comprehensive experiments show that dynamic semantic-scale-balanced learning consistently enables the model to perform superiorly on large-scale long-tailed and non-long-tailed natural and medical datasets, which is a good starting point for mitigating the prevalent but unnoticed model bias.
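One plausible proxy for per-class feature diversity (the paper defines its own semantic-scale measure; this sketch only illustrates the idea) is the log-volume of the class's feature covariance:

```python
import numpy as np

def diversity_proxy(features, eps=1e-6):
    """Log-volume (log-determinant) of the feature covariance of one class,
    used here as a stand-in for a semantic-scale-style diversity measure."""
    c = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    _, logdet = np.linalg.slogdet(c)
    return logdet

rng = np.random.default_rng(0)
tight = rng.standard_normal((500, 8)) * 0.1  # low-diversity class features
wide = rng.standard_normal((500, 8)) * 1.0   # high-diversity class features
print(diversity_proxy(tight) < diversity_proxy(wide))  # True
```

Two classes with equal sample counts can still differ sharply under such a measure, which is exactly the phenomenon (3) that sample-count-based imbalance measures miss.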